Many modern research fields increasingly rely on collecting and analysing massive, often unstructured, and unwieldy datasets. Consequently, there is growing interest in machine learning and artificial intelligence applications that can harness this 'data deluge'. This broad nontechnical overview provides a gentle introduction to machine learning with a specific focus on medical and biological applications. We explain the common types of machine learning algorithms and typical tasks that can be solved, illustrating the basics with concrete examples from healthcare. Lastly, we provide an outlook on open challenges, limitations, and potential impacts of machine-learning-powered medicine.
Spatial understanding is a fundamental aspect of computer vision and integral for human-level reasoning about images, making it an important component for grounded language understanding. While recent large-scale text-to-image synthesis (T2I) models have shown unprecedented improvements in photorealism, it is unclear whether they have reliable spatial understanding capabilities. We investigate the ability of T2I models to generate correct spatial relationships among objects and present VISOR, an evaluation metric that captures how accurately the spatial relationship described in text is generated in the image. To benchmark existing models, we introduce a large-scale challenge dataset SR2D that contains sentences describing two objects and the spatial relationship between them. We construct and harness an automated evaluation pipeline that employs computer vision to recognize objects and their spatial relationships, and we employ it in a large-scale evaluation of T2I models. Our experiments reveal a surprising finding that, although recent state-of-the-art T2I models exhibit high image quality, they are severely limited in their ability to generate multiple objects or the specified spatial relations such as left/right/above/below. Our analyses demonstrate several biases and artifacts of T2I models such as the difficulty with generating multiple objects, a bias towards generating the first object mentioned, spatially inconsistent outputs for equivalent relationships, and a correlation between object co-occurrence and spatial understanding capabilities. We conduct a human study that shows the alignment between VISOR and human judgment about spatial understanding. We offer the SR2D dataset and the VISOR metric to the community in support of T2I spatial reasoning research.
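The abstract describes the VISOR evaluation only at a high level. A minimal sketch of how such a detector-based spatial check might work, assuming bounding boxes from an object detector and comparing centroids (all function and field names here are hypothetical illustrations, not the authors' implementation):

```python
def centroid(box):
    """Center (x, y) of a bounding box given as (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2, (y_min + y_max) / 2)

def relationship_holds(box_a, box_b, relation):
    """Check a 2D spatial relation between two detected objects by
    comparing bounding-box centroids (image y grows downward)."""
    (ax, ay), (bx, by) = centroid(box_a), centroid(box_b)
    checks = {
        "left of": ax < bx,
        "right of": ax > bx,
        "above": ay < by,
        "below": ay > by,
    }
    return checks[relation]

def visor_score(samples):
    """Fraction of generated images in which the detector found both
    objects AND the described relation holds (a VISOR-style accuracy)."""
    correct = sum(
        1 for s in samples
        if s["box_a"] is not None and s["box_b"] is not None
        and relationship_holds(s["box_a"], s["box_b"], s["relation"])
    )
    return correct / len(samples)
```

Treating a missing detection as a failure reflects the abstract's finding that models often fail simply by not generating both objects at all.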
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Objective: Steady-state visual evoked potentials (SSVEPs), measured with electroencephalography (EEG), yield decent information transfer rates (ITRs) in brain-computer interface (BCI) spellers. However, the current high-performing SSVEP BCI spellers in the literature require an initially lengthy and tiring user-specific training for each new user for system adaptation, including data collection with EEG experiments, algorithm training, and calibration (all performed before actual use of the system). This hinders the widespread use of BCIs. To ensure practicality, we propose a novel target identification method based on an ensemble of deep neural networks (DNNs), which does not require any user-specific training. Method: We exploit already-existing literature datasets from participants of previously conducted EEG experiments to train a global target identifier DNN, which is then fine-tuned to each participant. We transfer this ensemble of fine-tuned DNNs to the new user instance, determine the k most representative DNNs according to the participants' statistical similarities to the new user, and predict the target character through a weighted combination of the ensemble predictions. Results: On the two large-scale Benchmark and BETA datasets, our method achieves impressive ITRs of 155.51 bits/min and 114.64 bits/min, respectively. Code is available for reproducibility: https://github.com/osmanberke/ensemble-fnns Conclusion: The proposed method significantly outperforms all state-of-the-art alternatives for all stimulation durations in [0.2-1.0] seconds on both datasets. Significance: Our ensemble-DNN method has the potential to promote the practical widespread deployment of BCI spellers in daily life, as we provide the highest performance while enabling immediate use without any user-specific training.
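The ensemble step, selecting the k DNNs fine-tuned on the participants statistically most similar to the new user and combining their predictions with similarity-based weights, might be sketched as follows (a simplified illustration with NumPy; the similarity measure and weighting scheme here are stand-ins, not the authors' exact formulation):

```python
import numpy as np

def select_and_combine(similarities, predictions, k):
    """similarities: (n_participants,) similarity of each fine-tuned DNN's
    participant to the new user; predictions: (n_participants, n_classes)
    per-DNN class scores for one trial. Returns the predicted class index
    from the similarity-weighted combination of the k most similar DNNs."""
    top_k = np.argsort(similarities)[-k:]     # indices of the k most similar participants
    weights = similarities[top_k]
    weights = weights / weights.sum()         # normalize to a convex combination
    combined = weights @ predictions[top_k]   # weighted average of class scores
    return int(np.argmax(combined))
```

The weighting lets DNNs fine-tuned on statistically closer participants dominate the vote, which is the intuition behind transferring without user-specific calibration.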
Toxic language detection systems often falsely flag text that mentions minority groups as toxic, because those groups are frequently the targets of online hate. Such over-reliance on spurious correlations also causes systems to struggle with detecting implicitly toxic language. To help mitigate these issues, we create ToxiGen, a new large-scale, machine-generated dataset of 274k toxic and benign statements about 13 minority groups. We develop a demonstration-based prompting framework and an adversarial classifier-in-the-loop decoding method to generate subtly toxic and benign text with a massive pretrained language model. Controlling machine generation in this way allows ToxiGen to cover implicitly toxic text at a larger scale, and across more demographic groups, than previous resources of human-written text. We conduct a human evaluation on a challenging subset of ToxiGen and find that annotators struggle to distinguish machine-generated text from human-written language. We also find that 94.5% of the toxic examples are labeled as hate speech by human annotators. Using three publicly available datasets, we show that finetuning a toxicity classifier on our data substantially improves its performance on human-written data. We also demonstrate that ToxiGen can be used to combat machine-generated toxicity, as finetuning significantly improves the classifier on our evaluation subset. Our code and data can be found at https://github.com/microsoft/toxigen.
Cognitive architectures aimed at cumulative learning must provide the necessary information and control structures to allow agents to learn incrementally from their experience. This involves managing an agent's goals and continuously relating sensory information to them within its perception-cognition information stack. The more varied the environment of a learning agent is, the more general and flexible these mechanisms must be to handle a wider range of relevant patterns, tasks, and goal structures. While many researchers agree that information at different levels may differ in its makeup, structure, and processing mechanisms, agreement on the details of such differences is not generally shared in the research community. A binary processing architecture (often referred to as System-1 and System-2) has been proposed as a model of cognitive processing for low- and high-level information, respectively. We posit that cognition is not binary in this way and that knowledge at any level of abstraction involves what we refer to as neurosymbolic information, meaning that data at both high and low levels must contain both symbolic and subsymbolic information. Further, we argue that the main differentiating factor between the processing of high and low levels of data abstraction can be largely attributed to the nature of the attention mechanisms involved. We describe the key arguments behind this view and review relevant evidence from the literature.
Sketches are abstract representations of visual perception and visuospatial construction. In this work, we propose a new framework, GAN-CNMP, that incorporates a novel adversarial loss into CNMP to improve sketch smoothness and consistency. Through experiments, we show that our model can be trained with a small number of unlabeled samples, can construct distributions in the latent space automatically, and produces better results than the base model in terms of shape consistency and smoothness.
In federated learning, each participant trains its local model with its own data, and a global model is formed at a trusted server by aggregating the model updates coming from these participants. Since the server has no influence on or visibility into the participants' training procedures, in order to preserve privacy, the global model becomes vulnerable to attacks such as data poisoning and model poisoning. Although many defense algorithms have recently been proposed to address these attacks, they often make strong assumptions that do not fit the nature of federated learning, such as assuming non-IID datasets. Moreover, they mostly lack comprehensive experimental analyses. In this work, we propose a defense algorithm called BARFED that makes no assumptions about data distribution, the similarity of participants' updates, or the ratio of malicious participants. BARFED mainly considers the outlier status of each participant's update for each layer of the model architecture, based on the distance to the global model. Hence, only participants that have no outlier layer take part in model aggregation. We perform extensive experiments under many settings and show that the proposed approach provides a robust defense against different attacks.
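The layer-wise filtering described above can be sketched roughly as follows. This is a simplified interpretation rather than the authors' reference code: here a participant's layer is flagged as an outlier by a standard Tukey/IQR rule on its Euclidean distance to the global model's layer, and only participants with no flagged layer survive:

```python
import numpy as np

def barfed_style_filter(global_model, updates):
    """global_model: {layer_name: array}; updates: list of participant
    models with the same structure. Returns indices of participants with
    no outlier layer; only these would take part in aggregation."""
    n = len(updates)
    has_outlier = np.zeros(n, dtype=bool)
    for layer in global_model:
        # Euclidean distance of each participant's layer to the global layer.
        dists = np.array([np.linalg.norm(u[layer] - global_model[layer])
                          for u in updates])
        q1, q3 = np.percentile(dists, [25, 75])
        upper = q3 + 1.5 * (q3 - q1)   # Tukey fence for outlier detection
        has_outlier |= dists > upper
    return [i for i in range(n) if not has_outlier[i]]
```

Because the rule is relative to the empirical distribution of distances per layer, it requires no assumption about the data distribution or the fraction of malicious participants, which matches the abstract's design goal.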
Disaggregated evaluations of AI systems, in which system performance is assessed and reported separately for different groups of people, are conceptually simple. However, their design involves a variety of choices. Some of these choices influence the results that will be obtained, and thus the conclusions that can be drawn; others influence the impacts, both beneficial and harmful, that a disaggregated evaluation will have on people, including the people whose data is used to conduct the evaluation. We argue that a deeper understanding of these choices will enable researchers and practitioners to design careful and conclusive disaggregated evaluations. We also argue that better documentation of these choices, along with the underlying considerations and tradeoffs involved, will help others when interpreting an evaluation's results and conclusions.